Extracting 3D facial animation parameters from multiview video clips - Computer Graphics and Applications, IEEE

Authors

  • I-Chen Lin
  • Jeng-Sheng Yeh
Abstract

A synthetic face's behaviors must precisely conform to those of a real one. However, facial surface points, being nonlinear and without rigid-body properties, have quite complex action relations. During speech and pronunciation, facial motion trajectories between articulations, called coarticulation effects, are also nonlinear and depend on the preceding and succeeding articulations. Performance-driven facial animation provides a direct and convincing approach to handling delicate human facial variations. This method animates a synthetic face using motion data captured from a performer. In modern computer graphics-based movies such as Final Fantasy, Shrek, and Toy Story, character motion designers used optical or magnetic motion trackers to capture the 3D motion trajectories of markers on a performer's face. These trackers can follow only a limited number of markers without interference, however, and the dozen or so markers that can be placed on facial feature points only sparsely cover the whole face. Therefore, to derive a vivid facial animation, animators must manually adjust the uncovered areas. Other approaches, discussed in the "Related Work" sidebar, also present limitations in analyzing and synthesizing facial motion. To tackle this problem, we propose an accurate and inexpensive procedure that estimates 3D facial motion parameters from mirror-reflected multiview video clips. We place two planar mirrors near a subject's cheeks and use a single camera to simultaneously capture the markers' front- and side-view images. We also propose a novel closed-form linear algorithm that reconstructs 3D positions from correspondences between real and mirrored points in an uncalibrated environment. Figure 1 shows such a reconstruction.
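The paper's own contribution is a closed-form linear solution that works without camera calibration. Purely as an illustration of the underlying mirror geometry, and not the authors' algorithm, the Python sketch below treats the mirror as a second, virtual camera and triangulates a marker from its direct and reflected projections. It assumes known camera intrinsics K and a known mirror plane (n, d), assumptions the uncalibrated method in the paper does not require.

    import numpy as np

    def reflection_matrix(n, d):
        """4x4 homogeneous reflection about the mirror plane n·x + d = 0."""
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        M = np.eye(4)
        M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)   # Householder reflection
        M[:3, 3] = -2.0 * d * n
        return M

    def triangulate_with_mirror(K, n, d, x_real, x_mirror):
        """Recover a marker's 3D position from its direct and mirror-reflected projections.

        K        -- 3x3 camera intrinsics (assumed known for this sketch)
        n, d     -- mirror plane parameters, n·x + d = 0 (assumed known for this sketch)
        x_real   -- 2D pixel coordinates of the marker seen directly
        x_mirror -- 2D pixel coordinates of the marker seen in the mirror
        """
        P_real = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # real camera at the origin
        P_virt = P_real @ reflection_matrix(n, d)                # "virtual" camera behind the mirror

        # Linear (DLT-style) triangulation: each view contributes two homogeneous equations.
        A = np.vstack([
            x_real[0]   * P_real[2] - P_real[0],
            x_real[1]   * P_real[2] - P_real[1],
            x_mirror[0] * P_virt[2] - P_virt[0],
            x_mirror[1] * P_virt[2] - P_virt[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

Reflecting the projection matrix, rather than the image, lets ordinary two-view linear triangulation apply unchanged; with two mirrors flanking the cheeks, each marker can appear in up to three views, and the same linear system simply gains more rows.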


Similar articles


MoCap: Automatic and efficient capture of dense 3D facial motion parameters from video

M. Ouhyoung, Dept. of Computer Science and Information Engineering, National Taiwan University, No. 1 Roosevelt Rd. Sec. 4, Taipei 106, Taiwan. E-mail: [email protected]. Abstract: In this paper, we present an automatic and efficient approach to the capture of dense facial motion parameters, which extends our previous work on 3D reconstruction from mirror-reflected multiview video. To narrow sea...

Full text

Facial Capture and Animation in Visual Effects

In recent years, there has been increasing interest in facial animation research from both academia and the entertainment industry. Visual effects and video game companies both want to deliver new audience experiences – whether that is a hyper-realistic human character [Duncan 09] or a fantasy creature driven by a human performer [Duncan 10]. Having more efficient ways of delivering high qualit...

Full text

Analyzing Facial Expressions for Virtual Conferencing

In this paper we present a method for the estimation of three-dimensional motion from 2-D image sequences showing head and shoulder scenes typical for video telephone and tele-conferencing applications. We use a 3-D model that specifies the color and shape of the person in the video. Additionally, the model constrains the motion and deformation in the face to a set of facial expressions which ar...

Full text

Digital watermarking of MPEG-4 facial animation parameters

The enforcement of intellectual property rights is difficult for digital data. One possibility to support the enforcement is the embedding of digital watermarks containing information about copyright owner and/or receiver of the data. Watermarking methods have already been presented for audio, images, video, and polygonal 3D models. In this paper, we present a method for digital watermarking of...

Full text


Journal: Computer Graphics and Applications, IEEE

Volume   Issue

Pages  -

Publication year: 2001